Results 1 - 12 of 12
1.
Neuron; 111(4): 454-469, 2023 Feb 15.
Article in English | MEDLINE | ID: mdl-36640765

ABSTRACT

Replay in the brain has been viewed as rehearsal or, more recently, as sampling from a transition model. Here, we propose a new hypothesis: that replay can implement a form of compositional computation in which entities are assembled into relationally bound structures to derive qualitatively new knowledge. This idea builds on recent advances in neuroscience indicating that the hippocampus flexibly binds objects to generalizable roles and that replay strings these role-bound objects into compound statements. We suggest experiments to test our hypothesis, and we end by noting the implications for AI systems, which lack the human ability to radically generalize past experience to solve new problems.
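To make the idea of role binding and compound statements concrete, here is a small, purely illustrative sketch (not the paper's model): it uses tensor-product (outer-product) role-filler binding with random vectors, and every name in it (roles, objects, bind, unbind) is invented for this example.

```python
import numpy as np

# Illustrative sketch only: tensor-product binding of objects to roles, and a
# compound "statement" formed by summing the bindings, as replay might string
# role-bound objects together. Not the model proposed in the paper.
rng = np.random.default_rng(0)
d = 64
roles = {name: rng.normal(size=d) / np.sqrt(d) for name in ["agent", "action", "patient"]}
objects = {name: rng.normal(size=d) / np.sqrt(d) for name in ["fox", "chases", "rabbit"]}

def bind(role, filler):
    return np.outer(role, filler)            # role-filler binding

def unbind(statement, role):
    return role @ statement                  # approximate recovery of the bound filler

statement = (bind(roles["agent"], objects["fox"])
             + bind(roles["action"], objects["chases"])
             + bind(roles["patient"], objects["rabbit"]))

recovered = unbind(statement, roles["agent"])
best = max(objects, key=lambda k: objects[k] @ recovered)
print(best)                                  # 'fox' with high probability for random vectors
```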


Subjects
Hippocampus, Learning, Humans, Brain, Action Potentials
2.
Nat Commun; 10(1): 5489, 2019 Dec 02.
Article in English | MEDLINE | ID: mdl-31792198

ABSTRACT

Advances in artificial intelligence are stimulating interest in neuroscience. However, most attention is given to discrete tasks with simple action spaces, such as board games and classic video games. Less discussed in neuroscience are parallel advances in "synthetic motor control". While motor neuroscience has recently focused on optimization of single, simple movements, AI has progressed to the generation of rich, diverse motor behaviors across multiple tasks, at humanoid scale. It is becoming clear that specific, well-motivated hierarchical design elements repeatedly arise when engineering these flexible control systems. We review these core principles of hierarchical control, relate them to hierarchy in the nervous system, and highlight research themes that we anticipate will be critical in solving challenges at this disciplinary intersection.


Subjects
Deep Learning, Mammals/physiology, Animals, Artificial Intelligence, Humans, Motor Activity, Neurosciences
3.
Nat Commun; 10(1): 5223, 2019 Nov 19.
Article in English | MEDLINE | ID: mdl-31745075

ABSTRACT

Humans prolifically engage in mental time travel: we dwell on past actions and experience satisfaction or regret. More than storytelling, these recollections change how we act in the future and endow us with a computationally important ability to link actions and consequences across long spans of time. This ability helps address the problem of long-term credit assignment: evaluating the utility of actions within a long-duration behavioral sequence. Existing approaches to credit assignment in AI cannot solve tasks with long delays between actions and consequences. Here, we introduce a paradigm in which agents use recall of specific memories to credit past actions, allowing them to solve problems that are intractable for existing algorithms. This paradigm broadens the scope of problems that can be investigated in AI and offers a mechanistic account of behaviors that may inspire models in neuroscience, psychology, and behavioral economics.
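A minimal way to picture this kind of memory-based credit assignment is sketched below. It is not the paper's agent: it simply stores per-timestep state embeddings as memories and, when a delayed reward arrives, uses softmax attention between the reward-time state and the stored memories to decide which past steps receive credit. The function and variable names are illustrative.

```python
import numpy as np

# Hedged sketch: transport a delayed reward back to the past timesteps whose
# stored memories best match the state in which the reward arrived.
def transport_value(memories, reward_state, reward, temperature=1.0):
    """memories: (T, d) state embeddings stored during the episode.
       reward_state: (d,) embedding of the state where the delayed reward arrived."""
    sims = memories @ reward_state           # match between each past state and the reward context
    weights = np.exp(sims / temperature)
    weights /= weights.sum()                 # softmax attention over stored memories
    return weights * reward                  # per-timestep credit to add to the usual return

# toy usage: 100 steps, 32-dim unit embeddings; the reward context matches step 3
rng = np.random.default_rng(0)
mem = rng.normal(size=(100, 32))
mem /= np.linalg.norm(mem, axis=1, keepdims=True)
print(int(transport_value(mem, mem[3], reward=1.0).argmax()))  # expected: 3
```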


Subjects
Algorithms, Mental Processes/physiology, Psychological Models, Reinforcement (Psychology), Transfer of Experience/physiology, Artificial Intelligence, Humans, Learning/physiology, Problem Solving/physiology
4.
Nat Neurosci; 22(11): 1761-1770, 2019 Nov.
Article in English | MEDLINE | ID: mdl-31659335

ABSTRACT

Systems neuroscience seeks explanations for how the brain implements a wide variety of perceptual, cognitive and motor tasks. Conversely, artificial intelligence attempts to design computational systems based on the tasks they will have to solve. In artificial neural networks, the three components specified by design are the objective functions, the learning rules and the architectures. With the growing success of deep learning, which utilizes brain-inspired architectures, these three designed components have increasingly become central to how we model, engineer and optimize complex artificial learning systems. Here we argue that a greater focus on these components would also benefit systems neuroscience. We give examples of how this optimization-based framework can drive theoretical and experimental progress in neuroscience. We contend that this principled perspective on systems neuroscience will help to generate more rapid progress.


Subjects
Artificial Intelligence, Deep Learning, Neural Networks (Computer), Animals, Brain/physiology, Humans
5.
Nature; 557(7705): 429-433, 2018 May.
Article in English | MEDLINE | ID: mdl-29743670

ABSTRACT

Deep neural networks have achieved impressive successes in fields ranging from object recognition to complex games such as Go [1,2]. Navigation, however, remains a substantial challenge for artificial agents, with deep neural networks trained by reinforcement learning [3-5] failing to rival the proficiency of mammalian spatial behaviour, which is underpinned by grid cells in the entorhinal cortex [6]. Grid cells are thought to provide a multi-scale periodic representation that functions as a metric for coding space [7,8] and is critical for integrating self-motion (path integration) [6,7,9] and planning direct trajectories to goals (vector-based navigation) [7,10,11]. Here we set out to leverage the computational functions of grid cells to develop a deep reinforcement learning agent with mammal-like navigational abilities. We first trained a recurrent network to perform path integration, leading to the emergence of representations resembling grid cells, as well as other entorhinal cell types [12]. We then showed that this representation provided an effective basis for an agent to locate goals in challenging, unfamiliar, and changeable environments, optimizing the primary objective of navigation through deep reinforcement learning. The performance of agents endowed with grid-like representations surpassed that of an expert human and comparison agents, with the metric quantities necessary for vector-based navigation derived from grid-like units within the network. Furthermore, grid-like representations enabled agents to conduct shortcut behaviours reminiscent of those performed by mammals. Our findings show that emergent grid-like representations furnish agents with a Euclidean spatial metric and associated vector operations, providing a foundation for proficient navigation. As such, our results support neuroscientific theories that see grid cells as critical for vector-based navigation [7,10,11], demonstrating that the latter can be combined with path-based strategies to support navigation in challenging environments.
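The first stage described above, training a recurrent network to path-integrate, is a supervised problem: ego-velocity inputs in, position targets out. The toy sketch below only generates that kind of training pair from a random-walk trajectory; the arena size, speed statistics, and function names are assumptions of this example, not the paper's setup.

```python
import numpy as np

# Illustrative data generator for supervised path integration: the network's input
# is the ego-velocity sequence, and its target is the true position along the walk.
def simulate_trajectory(T=100, dt=0.1, arena=2.2, seed=0):
    rng = np.random.default_rng(seed)
    pos = np.zeros((T, 2))
    heading = rng.uniform(0, 2 * np.pi)
    speed = np.abs(rng.normal(0.2, 0.05, size=T))
    for t in range(1, T):
        heading += rng.normal(0, 0.3)                     # random turning
        step = speed[t] * dt * np.array([np.cos(heading), np.sin(heading)])
        pos[t] = np.clip(pos[t - 1] + step, 0, arena)     # stay inside a square arena
    vel = np.diff(pos, axis=0, prepend=pos[:1]) / dt      # ego-velocity inputs
    return vel, pos                                       # (inputs, supervised targets)

vel, pos = simulate_trajectory()
print(vel.shape, pos.shape)                               # (100, 2) (100, 2)
```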


Assuntos
Biomimética/métodos , Aprendizado de Máquina , Redes Neurais de Computação , Navegação Espacial , Animais , Córtex Entorrinal/citologia , Córtex Entorrinal/fisiologia , Meio Ambiente , Células de Grade/fisiologia , Humanos
6.
Behav Brain Sci; 40: e255, 2017 Jan.
Article in English | MEDLINE | ID: mdl-29342685

ABSTRACT

We agree with Lake and colleagues on their list of "key ingredients" for building human-like intelligence, including the idea that model-based reasoning is essential. However, we favor an approach that centers on one additional ingredient: autonomy. In particular, we aim toward agents that can both build and exploit their own internal models, with minimal human hand-engineering. We believe an approach centered on autonomous learning has the greatest chance of success as we scale toward real-world complexity, tackling domains for which ready-made formal models are not available. Here, we survey several important examples of progress toward building autonomous agents with human-like abilities, and highlight some outstanding challenges.


Subjects
Learning, Thinking, Humans, Problem Solving
7.
Behav Brain Sci; 40: e272, 2017 Jan.
Article in English | MEDLINE | ID: mdl-29342693

ABSTRACT

Lake et al. suggest that current AI systems lack the inductive biases that enable human learning. However, Lake et al.'s proposed biases may not directly map onto mechanisms in the developing brain. A convergence of fields may soon create a correspondence between biological neural circuits and optimization in structured architectures, allowing us to systematically dissect how brains learn.


Subjects
Cognition, Learning, Brain, Humans, Thinking
8.
Nature; 538(7626): 471-476, 2016 Oct 27.
Article in English | MEDLINE | ID: mdl-27732574

ABSTRACT

Artificial neural networks are remarkably adept at sensory processing, sequence learning and reinforcement learning, but are limited in their ability to represent variables and data structures and to store data over long timescales, owing to the lack of an external memory. Here we introduce a machine learning model called a differentiable neural computer (DNC), which consists of a neural network that can read from and write to an external memory matrix, analogous to the random-access memory in a conventional computer. Like a conventional computer, it can use its memory to represent and manipulate complex data structures, but, like a neural network, it can learn to do so from data. When trained with supervised learning, we demonstrate that a DNC can successfully answer synthetic questions designed to emulate reasoning and inference problems in natural language. We show that it can learn tasks such as finding the shortest path between specified points and inferring the missing links in randomly generated graphs, and then generalize these tasks to specific graphs such as transport networks and family trees. When trained with reinforcement learning, a DNC can complete a moving blocks puzzle in which changing goals are specified by sequences of symbols. Taken together, our results demonstrate that DNCs have the capacity to solve complex, structured tasks that are inaccessible to neural networks without external read-write memory.
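The core primitive behind such an external memory is differentiable, content-based addressing: the controller emits a key, the key is compared with every memory row, and a softmax over the similarities yields read weights. The sketch below shows only that primitive in simplified form (a full DNC also has write heads, usage tracking, and temporal link matrices); the function name, sharpness parameter, and toy memory are choices made for this example.

```python
import numpy as np

# Simplified content-based read from an external memory matrix (illustrative only).
def content_read(memory, key, beta=1.0, eps=1e-8):
    """memory: (N, W) matrix of N slots of width W; key: (W,) query; beta: sharpness."""
    mem_norm = memory / (np.linalg.norm(memory, axis=1, keepdims=True) + eps)
    key_norm = key / (np.linalg.norm(key) + eps)
    sims = mem_norm @ key_norm               # cosine similarity per memory slot
    w = np.exp(beta * sims)
    w /= w.sum()                             # differentiable read weighting
    return w @ memory                        # weighted read vector

memory = np.eye(4, 6)                        # toy memory: 4 slots, width 6
print(content_read(memory, memory[2], beta=10.0).round(2))   # recovers slot 2
```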

9.
Front Comput Neurosci; 10: 94, 2016.
Article in English | MEDLINE | ID: mdl-27683554

ABSTRACT

Neuroscience has focused on the detailed implementation of computation, studying neural codes, dynamics and circuits. In machine learning, however, artificial neural networks tend to eschew precisely designed codes, dynamics or circuits in favor of brute force optimization of a cost function, often using simple and relatively uniform initial architectures. Two recent developments have emerged within machine learning that create an opportunity to connect these seemingly divergent perspectives. First, structured architectures are used, including dedicated systems for attention, recursion and various forms of short- and long-term memory storage. Second, cost functions and training procedures have become more complex and are varied across layers and over time. Here we think about the brain in terms of these ideas. We hypothesize that (1) the brain optimizes cost functions, (2) the cost functions are diverse and differ across brain locations and over development, and (3) optimization operates within a pre-structured architecture matched to the computational problems posed by behavior. In support of these hypotheses, we argue that a range of implementations of credit assignment through multiple layers of neurons are compatible with our current knowledge of neural circuitry, and that the brain's specialized systems can be interpreted as enabling efficient optimization for specific problem classes. Such a heterogeneously optimized system, enabled by a series of interacting cost functions, serves to make learning data-efficient and precisely targeted to the needs of the organism. We suggest directions by which neuroscience could seek to refine and test these hypotheses.
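One scheme of the kind the authors argue is compatible with neural circuitry is feedback alignment, in which the error is sent backward through a fixed random matrix rather than the transpose of the forward weights. The toy regression below is a hedged sketch of that idea only; the network size, data, and learning rate are arbitrary choices of this example, not values from the paper.

```python
import numpy as np

# Feedback alignment sketch: W1 is updated using a fixed random feedback matrix B,
# yet the loss still decreases because the forward weights come to align with B.
rng = np.random.default_rng(0)
n_in, n_hid, n_out = 10, 32, 1
W1 = rng.normal(scale=0.3, size=(n_hid, n_in))
W2 = rng.normal(scale=0.3, size=(n_out, n_hid))
B = rng.normal(scale=0.3, size=(n_hid, n_out))   # fixed random feedback pathway

X = rng.normal(size=(256, n_in))
y = np.tanh(X @ rng.normal(size=(n_in, n_out)))  # toy regression targets

lr = 0.05
for _ in range(500):
    h = np.tanh(X @ W1.T)                        # hidden layer
    y_hat = h @ W2.T                             # linear readout
    e = y_hat - y                                # output error
    delta_h = (e @ B.T) * (1 - h ** 2)           # error routed through B, not W2.T
    W2 -= lr * e.T @ h / len(X)
    W1 -= lr * delta_h.T @ X / len(X)
print(float(np.mean((y_hat - y) ** 2)))          # training error shrinks despite B != W2.T
```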

10.
Neural Comput; 26(10): 2163-93, 2014 Oct.
Article in English | MEDLINE | ID: mdl-25058706

ABSTRACT

We propose and develop a hierarchical approach to network control of complex tasks. In this approach, a low-level controller directs the activity of a "plant," the system that performs the task. However, the low-level controller may be able to solve only fairly simple problems involving the plant. To accomplish more complex tasks, we introduce a higher-level controller that controls the lower-level controller. We use this system to direct an articulated truck to a specified location through an environment filled with static or moving obstacles. The final system consists of networks that have memorized associations between the sensory data they receive and the commands they issue. These networks are trained on a set of optimal associations generated by minimizing cost functions. Cost function minimization requires predicting the consequences of sequences of commands, which is achieved by constructing forward models, including a model of the lower-level controller. The forward models and cost minimization are used only during training, allowing the trained networks to respond rapidly. In general, the hierarchical approach can be extended to larger numbers of levels, dividing complex tasks into more manageable subtasks. The optimization procedure and the construction of the forward models and controllers can be performed in similar ways at each level of the hierarchy, which allows the system to be modified to perform other tasks or extended to more complex tasks without retraining lower levels.
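A toy illustration of the two-level idea (not the trained-network system described above): a high-level controller proposes a nearby waypoint toward the goal, and a low-level controller issues simple commands that move the plant toward that waypoint. The point-mass plant, gains, and names are all invented for this sketch.

```python
import numpy as np

# Two-level control sketch: the high level sets subgoals, the low level tracks them.
def high_level(state, goal, horizon=5.0):
    direction = goal - state
    return state + direction / max(np.linalg.norm(direction) / horizon, 1.0)  # nearby waypoint

def low_level(state, subgoal, gain=0.5):
    return gain * (subgoal - state)              # proportional command to the plant

state, goal = np.array([0.0, 0.0]), np.array([10.0, 5.0])
for _ in range(50):
    subgoal = high_level(state, goal)
    state = state + low_level(state, subgoal)    # toy plant dynamics: position += command
print(state.round(2))                            # approaches the goal [10, 5]
```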


Subjects
Brain/physiology, Neurological Models, Nerve Net/physiology, Neural Networks (Computer), Computer Simulation, Humans
11.
Nat Neurosci; 17(3): 416-22, 2014 Mar.
Article in English | MEDLINE | ID: mdl-24531306

ABSTRACT

Mormyrid electric fish are a model system for understanding how neural circuits predict the sensory consequences of motor acts. Medium ganglion cells in the electrosensory lobe create negative images that predict sensory input resulting from the fish's electric organ discharge (EOD). Previous studies have shown that negative images can be created through plasticity at granule cell-medium ganglion cell synapses, provided that granule cell responses to the brief EOD command are sufficiently varied and prolonged. Here we show that granule cells indeed provide such a temporal basis and that it is well-matched to the temporal structure of self-generated sensory inputs, allowing rapid and accurate sensory cancellation and explaining paradoxical features of negative images. We also demonstrate an unexpected and critical role of unipolar brush cells (UBCs) in generating the required delayed responses. These results provide a mechanistic account of how copies of motor commands are transformed into sensory predictions.
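A hedged, purely illustrative sketch of the cancellation principle (not the biophysical model in the paper): a bank of delayed "granule cell" responses forms a temporal basis after each command, and an anti-Hebbian update on the weights onto a model "medium ganglion cell" builds a negative image that subtracts the predictable self-generated input. The Gaussian basis, learning rate, and sine-shaped reafference are arbitrary choices for this demo.

```python
import numpy as np

# Negative-image learning on a temporal basis (illustrative toy, not the real circuit).
T = 50
t = np.arange(T)
centers = np.linspace(0, T - 1, 25)
basis = np.exp(-(t[None, :] - centers[:, None]) ** 2 / (2 * 2.0 ** 2))  # delayed basis responses
self_generated = np.sin(np.pi * t / (T - 1))      # predictable reafference after each discharge

w = np.zeros(len(centers))                        # plastic "granule -> MG cell" weights
lr = 0.02
for _ in range(2000):
    negative_image = w @ basis                    # prediction carried by the weights
    residual = self_generated + negative_image    # what the cell sees after cancellation
    w -= lr * basis @ residual                    # anti-Hebbian update: suppress predictable input

print(np.linalg.norm(self_generated), np.linalg.norm(self_generated + w @ basis))
# the residual norm ends far smaller than the raw input norm
```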


Subjects
Electric Fish/physiology, Electric Organ/physiology, Neurons/physiology, Sensation/physiology, Action Potentials/physiology, Animals, Brain/cytology, Brain/physiology, Electrophysiology/instrumentation, Electrophysiology/methods, Microelectrodes, Time Factors
12.
Sci Am; 305(6): 21, 2011 Dec.
Article in English | MEDLINE | ID: mdl-22214121